Speech systems are sensitive to accent variations. This is especially challenging in the Indian context, with an abundance of languages but a dearth of linguistic studies characterising pronunciation variations. The growing number of L2 English speakers in India reinforces the need to study accents and L1-L2 interactions. We investigate the accents of Indian English (IE) speakers and report in detail our observations, both specific and common to all regions. In particular, we observe the phonemic variations and phonotactics occurring in the speakers' native languages and apply this to their English pronunciations. We demonstrate the influence of 18 Indian languages on IE by comparing the native language pronunciations with IE pronunciations obtained jointly from existing literature studies and phonetically annotated speech of 80 speakers. Consequently, we are able to validate the intuitions of Indian language influences on IE pronunciations by justifying pronunciation rules from the perspective of Indian language phonology. We obtain a comprehensive description in terms of universal and region-specific characteristics of IE, which facilitates accent conversion and adaptation of existing ASR and TTS systems to different Indian accents.
Developing and least developed countries face the dire challenge of ensuring that each child in their country receives required doses of vaccination, adequate nutrition and proper medication. International agencies such as UNICEF, WHO and WFP, among other organizations, strive to find innovative solutions to determine which children have received the benefits and which have not. Biometric recognition systems have been sought out to help solve this problem. To that end, this report establishes a baseline accuracy of a commercial contactless palmprint recognition system that may be deployed for recognizing children in the age group of one to five years old. On a database of contactless palmprint images of one thousand unique palms from 500 children, we establish a SOTA authentication accuracy of 90.85% @ FAR = 0.01%, rank-1 identification accuracy of 99.0% (closed set), and FPIR = 0.01 @ FNIR = 0.3 for open-set identification using the PalmMobile SDK from Armatura.
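The operating point reported above (an authentication rate at a fixed FAR) can be computed from raw comparison scores. The sketch below is a minimal illustration under that assumption, not the Armatura SDK's API: it picks the score threshold that keeps the impostor accept rate at or below the target, then measures the genuine accept rate at that threshold.

```python
import numpy as np

def tar_at_far(genuine_scores, impostor_scores, far_target=1e-4):
    """True Accept Rate at a fixed False Accept Rate.

    The threshold is chosen so that the fraction of impostor scores
    strictly above it does not exceed far_target; TAR is then the
    fraction of genuine scores above that same threshold.
    """
    impostor = np.sort(np.asarray(impostor_scores, dtype=float))[::-1]
    m = int(far_target * len(impostor))  # max impostors we may accept
    threshold = impostor[m] if m < len(impostor) else -np.inf
    genuine = np.asarray(genuine_scores, dtype=float)
    tar = float(np.mean(genuine > threshold))
    return tar, threshold
```

With 100 impostor scores and `far_target=0.01`, at most one impostor comparison may be accepted, so the threshold lands at the second-highest impostor score.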
The use of vision transformers (ViT) in computer vision is increasing due to their limited inductive biases (e.g., locality, weight sharing, etc.) and increased scalability compared to other deep learning methods. This has led to some initial studies on the use of ViT for biometric recognition, including fingerprint recognition. In this work, we improve on these initial studies for transformers in fingerprint recognition by i.) evaluating additional attention-based architectures, ii.) scaling to larger and more diverse training and evaluation datasets, and iii.) combining the complementary representations of attention-based and CNN-based embeddings for improved state-of-the-art (SOTA) fingerprint recognition (both authentication and identification). Our combined architecture, AFR-Net (Attention-Driven Fingerprint Recognition Network), outperforms several baseline transformer and CNN-based models, including a SOTA commercial fingerprint system, Verifinger v12.3, across intra-sensor, cross-sensor, and latent to rolled fingerprint matching datasets. Additionally, we propose a realignment strategy using local embeddings extracted from intermediate feature maps within the networks to refine the global embeddings in low certainty situations, which boosts the overall recognition accuracy significantly across each of the models. This realignment strategy requires no additional training and can be applied as a wrapper to any existing deep learning network (including attention-based, CNN-based, or both) to boost its performance.
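Combining attention-based and CNN-based embeddings can be sketched with the common normalize-concatenate-cosine pattern. This is a hypothetical illustration, assuming each branch emits one fixed-length vector; it is not AFR-Net's actual fusion scheme, and the function names are invented for the example.

```python
import numpy as np

def fuse_embeddings(cnn_emb, attn_emb):
    """Concatenate L2-normalized CNN and attention embeddings.

    Normalizing each branch first gives both representations equal
    weight in the fused vector; the fused vector is re-normalized so
    that a plain dot product equals cosine similarity.
    """
    def l2norm(v):
        v = np.asarray(v, dtype=float)
        return v / np.linalg.norm(v)

    fused = np.concatenate([l2norm(cnn_emb), l2norm(attn_emb)])
    return fused / np.linalg.norm(fused)

def match_score(e1, e2):
    """Cosine similarity between two fused (unit-norm) embeddings."""
    return float(np.dot(e1, e2))
```

Matching a fused embedding against itself yields a score of 1.0, and embeddings whose branches disagree everywhere score near 0.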
Deep neural networks (DNNs) have shown incredible promise in learning fixed-length representations of fingerprints. Since representation learning often focuses on capturing specific prior knowledge (e.g., minutiae), there is no universal representation that comprehensively encapsulates all the discriminatory information in a fingerprint. Learning an ensemble of representations can mitigate this problem, but two critical challenges need to be addressed: (i) How can multiple diverse representations be extracted from the same fingerprint image? (ii) How can these representations be optimally exploited during matching? In this work, we train multiple instances of DeepPrint (a DNN-based fingerprint encoder) on different transformations of the input image to generate an ensemble of fingerprint embeddings. We also propose a feature-fusion technique that distills these multiple representations into a single embedding, faithfully capturing the diversity present in the ensemble without increasing computational complexity. The proposed approach has been comprehensively evaluated on five databases containing rolled, plain and latent fingerprints (NIST SD4, NIST SD14, NIST SD27, NIST SD302 and FVC2004 DB2A), and statistically significant improvements have been consistently demonstrated in verification as well as closed-set and open-set identification settings. The proposed approach serves as a wrapper capable of improving the accuracy of any DNN-based recognition system.
Given a full fingerprint image (rolled or slap), we present a CycleGAN model to generate multiple latent impressions of the same identity as the full print. Our model can control the degree of distortion, noise, blurriness and occlusion in the generated latent print images to obtain the good, bad and ugly latent image categories introduced in the NIST SD27 latent database. The contributions of our work are twofold: (i) we demonstrate the similarity of synthetically generated latent fingerprint images to crime-scene latents in the NIST SD27 and MSP databases, as evaluated by the NIST NFIQ 2 quality measure and by ROC curves from a SOTA fingerprint matcher; and (ii) we use the synthetic latents to augment small latent training databases in the public domain to improve the performance of DeepPrint, a SOTA fingerprint matcher designed for rolled-to-rolled fingerprint matching, on three latent databases (NIST SD27, NIST SD302 and IIITD-SLF). For example, with augmentation by synthetic latent data, the rank-1 retrieval performance of DeepPrint improves from 15.50% to 29.07% on the challenging NIST SD27 latent database. Our approach for generating synthetic latent fingerprints can be used to improve the recognition performance of any latent matcher and its individual components (e.g., enhancement, segmentation and feature extraction).
In this work, we study multi-domain learning for face anti-spoofing (MD-FAS), where a pre-trained FAS model needs to be updated to perform equally well on both the source and target domains while only using target-domain data for the update. We present a new model for MD-FAS, which addresses the forgetting issue when learning new-domain data, while possessing a high level of adaptability. First, we devise a simple yet effective module, called the spoof region estimator (SRE), to identify spoof traces in the spoof image. Such spoof traces reflect the source pre-trained model's responses, which help the upgraded model combat catastrophic forgetting during updating. Unlike prior works that estimate spoof traces to produce multiple outputs or a low-resolution binary mask, SRE produces a single, detailed pixel-wise estimate in an unsupervised manner. Secondly, we propose a novel framework, named FAS-Wrapper, which transfers knowledge from the pre-trained model and seamlessly integrates with different FAS models. Lastly, to help the community further advance MD-FAS, we construct a new benchmark based on SiW, SiW-Mv2 and Oulu-NPU, and introduce four distinct evaluation protocols where the source and target domains differ in terms of spoof type, age, ethnicity and illumination. Our proposed method achieves superior performance on the MD-FAS benchmark compared with previous methods. Our code and the newly curated SiW-Mv2 are publicly available.
Although significant advances have been made in face recognition (FR), FR in unconstrained environments remains challenging due to the domain gap between semi-constrained training datasets and unconstrained testing scenarios. To address this problem, we propose a controllable face synthesis model (CFSM) that can mimic the distribution of a target dataset in a style latent space. CFSM learns a linear subspace in the style latent space, with precise control over the diversity and degree of synthesis. Furthermore, the pre-trained synthesis model can be guided by the FR model, making the resulting images more beneficial for FR model training. In addition, target dataset distributions are characterized by the learned orthogonal bases, which can be utilized to measure the distributional similarity between face datasets. Our approach yields significant performance gains on unconstrained benchmarks, such as IJB-B, IJB-C, TinyFace and IJB-S (+5.76% rank-1).
Analysis of Indian English (IE) pronunciation variabilities is useful in building systems for Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) synthesis in the Indian context. Typically, these pronunciation variabilities have been explored by comparing IE pronunciation with Received Pronunciation (RP). However, exploring these variabilities requires pronunciation data labelled at the phonetic level, which is scarce for IE. Moreover, the versatility of IE stems from the influence of a large diversity of speakers' mother tongues and differences across demographic regions. Prior linguistic works have characterised features of IE variabilities qualitatively by reporting phonetic rules that represent such variations relative to RP. These qualitative descriptions often lack quantitative descriptors and a data-driven analysis of diverse IE pronunciation data to characterise IE at the phonetic level. To address these issues, in this work, we consider a corpus, Indic TIMIT, containing a large set of IE varieties from 80 speakers from various regions of India. We present an analysis to obtain a new set of phonetic rules representing IE pronunciation variabilities relative to RP in a data-driven manner. We do this using 15,974 phonetic transcriptions, of which 13,632 were obtained manually in addition to those already part of the corpus. Furthermore, we validate the rules obtained from the analysis against the existing phonetic rules to identify the relevance of the obtained phonetic rules, and we test the efficacy of Grapheme-to-Phoneme (G2P) conversion developed based on the obtained rules, considering Phoneme Error Rate (PER) as the performance metric.
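PER, the metric used above, is conventionally the Levenshtein (edit) distance between the reference and hypothesized phoneme sequences, normalized by the reference length. A minimal sketch of that standard computation (not the authors' implementation) follows.

```python
def phoneme_error_rate(ref, hyp):
    """PER = (substitutions + deletions + insertions) / len(ref),
    via Levenshtein alignment over phoneme sequences."""
    n, m = len(ref), len(hyp)
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution/match
    return dp[n][m] / n if n else 0.0
```

For example, a G2P hypothesis that substitutes one phoneme in a three-phoneme reference scores a PER of 1/3.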
Human gait is regarded as a unique biometric identifier that can be acquired covertly at a distance. However, models trained on existing public-domain gait datasets, which are captured in controlled scenarios, suffer drastic performance declines when applied to real-world unconstrained gait data. On the other hand, video person re-identification techniques have achieved promising performance on large-scale publicly available datasets. Given the diversity of clothing characteristics, clothing cues are unreliable for person recognition; thus, it is actually unclear why state-of-the-art person re-identification methods work as well as they do. In this paper, we construct a new gait dataset by extracting silhouettes from an existing video person re-identification challenge, which consists of 1,404 people walking in an unconstrained manner. Based on this dataset, a consistent and comparative study between gait recognition and person re-identification can be carried out. Given that our experimental results show that current gait recognition approaches, designed on data collected in controlled scenarios, are inappropriate for real surveillance scenarios, we propose a novel gait recognition method, named RealGait. Our results suggest that recognizing people by their gait in real surveillance scenarios is feasible, and the underlying gait pattern may be the true reason why video person re-identification works in practice.
A major obstacle for researchers working in the area of fingerprint recognition is the lack of publicly available, large-scale fingerprint datasets. The publicly available datasets that do exist contain only a few identities and impressions per finger. This limits research on many topics, including, for example, using deep networks to learn fixed-length fingerprint embeddings. Therefore, we propose PrintsGAN, a synthetic fingerprint generator capable of producing unique fingerprints along with multiple impressions of a given fingerprint. Using PrintsGAN, we synthesize a database of 525,000 fingerprints (35,000 distinct fingers, each with 15 impressions). Next, we demonstrate the utility of the PrintsGAN-generated dataset by training a deep network to extract fixed-length embeddings from fingerprints. In particular, an embedding model trained on our synthetic fingerprints and fine-tuned on 25,000 prints from NIST SD302 obtains a TAR of 87.03% @ FAR = 0.01% on the NIST SD4 database (a boost from TAR = 73.37% when trained only on NIST SD302). Prevailing synthetic fingerprint generation methods either i) lack realism or ii) are unable to generate multiple impressions. We plan to release our synthetic fingerprint database to the public.